
    Analysis of Petri Net Models through Stochastic Differential Equations

    It is well known, mainly through the work of Kurtz, that density-dependent Markov chains can be approximated by sets of ordinary differential equations (ODEs) when their indexing parameter grows very large. This approximation cannot capture the stochastic nature of the process and, consequently, can give an erroneous view of the behavior of the Markov chain when the indexing parameter is not sufficiently large. Important phenomena that it cannot reveal include non-negligible variance and bi-modal population distributions. A less well-known approximation, also proposed by Kurtz, uses stochastic differential equations (SDEs) and does provide information about the stochastic nature of the process. In this paper we apply and extend this diffusion approximation to study stochastic Petri nets. We identify a class of nets whose underlying stochastic process is a density-dependent Markov chain whose indexing parameter is a multiplicative constant identifying the population level expressed by the initial marking, and we provide means to automatically construct the associated set of SDEs. Since Kurtz's diffusion approximation considers the process only up to the time it first exits an open interval, we extend the approximation with a mechanism that mimics the behavior of the Markov chain at the boundary, which allows the approach to be applied to a wider set of problems. The resulting process is of jump-diffusion type. We illustrate by examples that the jump-diffusion approximation, extended to bounded domains, can be much more informative than the ODE-based one, as it can provide accurate quantity distributions even when they are multi-modal and even for relatively small population levels. Moreover, we show that the method is faster than simulating the original Markov chain.
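    The diffusion approximation described above can be sketched numerically. The following is a minimal, hedged illustration only: an Euler-Maruyama scheme for a density-dependent birth-death (SIS-type) chain, where the ODE drift F(x) is supplemented by a noise term of order 1/sqrt(N). The rates `beta`, `gamma`, the population level `N`, and the crude clipping at the boundary are all illustrative assumptions, not the paper's construction (the paper handles boundaries with a jump-diffusion mechanism).

```python
import numpy as np

def simulate_sde(beta=2.0, gamma=1.0, N=100, x0=0.5, T=5.0, dt=1e-3,
                 n_paths=200, seed=0):
    """Euler-Maruyama for the Kurtz diffusion approximation of a
    density-dependent birth-death chain (illustrative SIS-type rates)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.full(n_paths, x0)
    for _ in range(steps):
        drift = beta * x * (1 - x) - gamma * x   # ODE vector field F(x)
        diff2 = beta * x * (1 - x) + gamma * x   # sum of event rates G(x)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + drift * dt + np.sqrt(np.maximum(diff2, 0.0) / N) * dW
        # Crude boundary handling for the sketch; the paper's jump-diffusion
        # machinery treats the boundary behavior properly.
        x = np.clip(x, 0.0, 1.0)
    return x

final = simulate_sde()
```

    Unlike the ODE solution, which for these rates sits at the deterministic equilibrium, the SDE paths retain a spread of order 1/sqrt(N) around it, which is exactly the stochastic information the ODE limit discards.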

    Heat release by controlled continuous-time Markov jump processes

    We derive the equations governing the protocols minimizing the heat released by a continuous-time Markov jump process on a one-dimensional countable state space during a transition between assigned initial and final probability distributions in a finite time horizon. In particular, we identify the hypotheses on the transition rates under which the optimal control strategy and the probability distribution of the Markov jump process obey a system of differential equations of Hamilton-Jacobi-Bellman type. As the state-space mesh tends to zero, these equations converge to those satisfied by the diffusion process minimizing the heat released in the Langevin formulation of the same problem. We also show that, in full analogy with the continuum case, heat minimization is equivalent to entropy-production minimization. Thus, our results may be interpreted as a refined version of the second law of thermodynamics.
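    For orientation, the continuum (Langevin) limit referred to above has a well-known schematic structure in optimal-transport treatments of finite-time heat minimization; the sketch below is that standard form, not necessarily the exact system derived in the paper:

```latex
\partial_t \rho + \nabla\cdot\bigl(\rho\,\nabla v\bigr) = 0,
\qquad
\partial_t v + \tfrac{1}{2}\,\lvert\nabla v\rvert^{2} = 0,
```

    where $\rho$ is the probability density transported from the initial to the final distribution and $\nabla v$ is the optimal drift; the released heat then splits into a boundary (entropy-difference) term plus the entropy production, which is why minimizing one is equivalent to minimizing the other.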

    Large Deviations Analysis of Extinction in Branching Models

    Cramér's classical theorem is applied to obtain large deviations in branching processes. This is a new avenue for the analysis of models in discrete and continuous time. For the Galton-Watson process, a new formula for the rate function is derived in terms of the Legendre transform of its offspring distribution. Further analysis of the approximate path to extinction produces a new and interesting formula.
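    As background for the Legendre-transform formula mentioned above, Cramér's theorem for i.i.d. summands with the law of an offspring variable $\xi$ gives the rate function as the convex conjugate of the cumulant generating function; this is the standard statement, and the paper's branching-specific formula may refine it:

```latex
\Lambda(\theta) = \log \mathbb{E}\!\left[e^{\theta \xi}\right],
\qquad
\Lambda^{*}(x) = \sup_{\theta \in \mathbb{R}} \bigl\{ \theta x - \Lambda(\theta) \bigr\}.
```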